[Big Data Study Series 2] Hadoop Cluster Installation (III)
7.3 Rename the directory

```shell
[hd@master apps]$ mv hadoop-3.0.0 hadoop
[hd@master apps]$ ll
total 324644
drwxr-xr-x. 12 hd hd 192 Jul 11 00:09 hadoop
```

7.4 Modify the Hadoop configuration files

7.4.1 Edit hadoop-env.sh

```shell
[hd@master ~]$ cd /home/hd/apps/hadoop/etc/hadoop/
[hd@master hadoop]$ pwd
/home/hd/apps/hadoop/etc/hadoop
[hd@master hadoop]$ vi hadoop-env.sh
```

Append at the end of the file (press "G" in vi to jump to the end):

```shell
export JAVA_HOME=/home/hd/apps/java
```

7.4.2 Edit core-site.xml
Add the following properties inside the `<configuration>` element:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://master:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/home/hd/apps/hadoop/tmpdata</value>
</property>
```

7.4.3 Edit hdfs-site.xml
Add the following properties inside the `<configuration>` element:

```xml
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
<property>
  <name>dfs.namenode.http-address</name>
  <value>master:50070</value>
</property>
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>master:50090</value>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/hd/apps/hadoop/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/home/hd/apps/hadoop/datanode</value>
</property>
```

7.4.4 Edit mapred-site.xml
Add the following properties inside the `<configuration>` element:

```xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/home/hd/apps/hadoop</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/home/hd/apps/hadoop</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/home/hd/apps/hadoop</value>
</property>
```

7.4.5 Edit yarn-site.xml
Add the following properties inside the `<configuration>` element:

```xml
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
```

7.4.6 Edit workers

```shell
[hd@master hadoop]$ vi workers
```

List the worker hostnames, one per line:

```
slave01
slave02
```

7.4.7 Set the environment variables

```shell
[hd@master hadoop]$ su root
Password:
[root@master hadoop]# vi /etc/profile
```

Append the following lines:

```shell
export HADOOP_HOME=/home/hd/apps/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

7.5 Copy to the second and third machines

```shell
[root@master hadoop]# su hd
[hd@master hadoop]$ scp -r /home/hd/apps/hadoop hd@slave01:/home/hd/apps/
[hd@master hadoop]$ scp -r /home/hd/apps/hadoop hd@slave02:/home/hd/apps/
[hd@master hadoop]$ su root
Password:
[root@master hadoop]# scp /etc/profile root@slave01:/etc/
root@slave01's password:
profile 100% 1896 1.9KB/s 00:00
[root@master hadoop]# scp /etc/profile root@slave02:/etc/
profile 100% 1896 1.9KB/s 00:00
```

Reload the environment (do this on all three machines) and verify:

```shell
[root@master hadoop]# source /etc/profile
[hd@master hadoop]$ hadoop version
Hadoop 3.0.0
```

7.6 Format the NameNode

The NameNode metadata directory does not exist yet; formatting creates it:

```shell
[hd@master hadoop]$ ll /home/hd/apps/hadoop/namenode
ls: cannot access /home/hd/apps/hadoop/namenode: No such file or directory
[hd@master hadoop]$ hadoop namenode -format
```

(In Hadoop 3, `hadoop namenode -format` is deprecated in favor of `hdfs namenode -format`; both still work.)

7.7 Start Hadoop

start-dfs.sh  — starts the HDFS distributed file system; stop with stop-dfs.sh
start-yarn.sh — starts the YARN resource manager; stop with stop-yarn.sh
start-all.sh  — starts both HDFS and YARN; stop with stop-all.sh

7.8 Check the processes with jps

```shell
[hd@master ~]$ jps
23668 SecondaryNameNode
23467 NameNode
23903 ResourceManager
24207 Jps
[hd@slave01 ~]$ jps
22341 DataNode
22649 Jps
22458 NodeManager
[hd@slave02 ~]$ jps
23367 Jps
23176 NodeManager
23051 DataNode
```

7.9 Test

HDFS web UI: http://192.168.126.128:50070/dfshealth.html#tab-overview (port 50070 is the dfs.namenode.http-address set in 7.4.3)
YARN resource manager UI: http://192.168.126.128:8088/cluster
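The property edits in sections 7.4.2 through 7.4.5 all follow the same `<property>`/`<name>`/`<value>` pattern. As a sketch, such a block can be generated with a here-document; `write_site_prop` is a hypothetical helper, not part of the original tutorial:

```shell
# Sketch: emit one Hadoop <property> block for a name/value pair.
# write_site_prop is a hypothetical helper, not from the tutorial.
write_site_prop() {
  name="$1"; value="$2"
  cat <<EOF
<property>
  <name>${name}</name>
  <value>${value}</value>
</property>
EOF
}

# The two core-site.xml properties from section 7.4.2:
write_site_prop fs.defaultFS hdfs://master:9000
write_site_prop hadoop.tmp.dir /home/hd/apps/hadoop/tmpdata
```

The output still has to be pasted inside the `<configuration>` element by hand; the helper only removes the repetitive tag typing.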
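The /etc/profile edit in 7.4.7 appends two lines, so running it twice would duplicate them. An idempotent variant, as a sketch only: `PROFILE` here is a scratch file standing in for /etc/profile.

```shell
# Idempotent version of the 7.4.7 edit: append the exports only if
# they are not already present. PROFILE is a scratch stand-in for
# /etc/profile in this sketch.
PROFILE=$(mktemp)
grep -q 'HADOOP_HOME=' "$PROFILE" || cat >> "$PROFILE" <<'EOF'
export HADOOP_HOME=/home/hd/apps/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
grep 'HADOOP_HOME' "$PROFILE"
```

The single-quoted `'EOF'` delimiter keeps `$PATH` and the other variables from being expanded at append time, which is what you want in a profile file.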
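In 7.6 the `ll` check shows that the NameNode directory does not exist before formatting. Formatting creates it, but the storage directories from 7.4.3 can also be created up front. A minimal sketch; `BASE` uses a scratch path here for illustration, whereas on the real cluster it would be /home/hd/apps/hadoop:

```shell
# Pre-create the storage directories named in hdfs-site.xml (7.4.3).
# BASE is a scratch path for illustration; on the real cluster it
# would be /home/hd/apps/hadoop.
BASE=$(mktemp -d)
mkdir -p "$BASE/namenode" "$BASE/datanode" "$BASE/tmpdata"
ls -ld "$BASE/namenode" "$BASE/datanode"
```

`mkdir -p` is safe to repeat: it creates missing parents and succeeds silently if the directories already exist.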
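The manual jps inspection in 7.8 can be scripted. A sketch: `check_daemon` is a hypothetical helper, and it is fed the captured master listing from 7.8 here rather than live `jps` output.

```shell
# Sketch: assert that a daemon name appears in jps-style output.
# check_daemon is hypothetical; on a live node, pipe `jps` into it.
check_daemon() {
  awk -v want="$1" '$2 == want { found = 1 } END { exit !found }'
}

# Captured master listing from section 7.8, used instead of live jps:
master_jps='23668 SecondaryNameNode
23467 NameNode
23903 ResourceManager
24207 Jps'

echo "$master_jps" | check_daemon NameNode && echo "NameNode is running"
```

Matching on the second awk field avoids false positives from the PID column, and the non-zero exit on a missing daemon makes the check usable in scripts.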